We consider the problem of learning an episodic safe control policy that minimizes an objective function while satisfying necessary safety constraints, both during learning and deployment. We formulate this safety-constrained reinforcement learning (RL) problem as a finite-horizon constrained Markov decision process (CMDP) with an unknown transition probability function. Here, we model the safety requirement as a constraint on the expected cumulative cost that must be satisfied during all learning episodes. We propose a model-based safe RL algorithm that we call the Optimistic-Pessimistic Safe Reinforcement Learning (OPSRL) algorithm, and show that it achieves an $\tilde{\mathcal{O}}(S^{2}\sqrt{A H^{7} K}/(\bar{C} - \bar{C}_{b}))$ cumulative regret without violating the safety constraints during learning, where $S$ is the number of states, $A$ is the number of actions, $H$ is the horizon length, $K$ is the number of learning episodes, and $(\bar{C} - \bar{C}_{b})$ is the safety gap, i.e., the difference between the constraint value and the cost of a known safe baseline policy. The scaling as $\tilde{\mathcal{O}}(\sqrt{K})$ is the same as that of traditional approaches, which may violate the constraints during learning, meaning that our algorithm suffers no additional regret despite providing a safety guarantee. Our key idea is to exploit an optimistic exploration approach with pessimistic constraint enforcement for learning the policy. This approach simultaneously incentivizes the exploration of unknown states while imposing a penalty for visiting states that are likely to cause violation of the safety constraints. We validate our algorithm by evaluating its performance on benchmark problems against conventional approaches.
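To make the optimistic-pessimistic idea concrete, the sketch below shows one way an exploration bonus could enter a finite-horizon tabular backup: the bonus is subtracted from the objective cost (optimism) and added to the safety cost (pessimism). The array shapes, bonus form, and greedy action selection are illustrative assumptions only; the actual OPSRL algorithm plans subject to the constraint (e.g., using the safe baseline policy) rather than acting greedily.

```python
import numpy as np

def optimistic_pessimistic_backup(P_hat, c_obj, c_safe, bonus, H):
    """Illustrative finite-horizon backup for a tabular CMDP: the confidence
    bonus is subtracted from the objective cost (optimism) and added to the
    safety cost (pessimism).

    P_hat  : empirical transition probabilities, shape (H, S, A, S)
    c_obj  : objective costs in [0, 1],           shape (H, S, A)
    c_safe : safety costs in [0, 1],              shape (H, S, A)
    bonus  : confidence-width bonuses,            shape (H, S, A)
    """
    _, S, A, _ = P_hat.shape
    Q_obj = np.zeros((H + 1, S, A))   # optimistic objective Q-values
    Q_safe = np.zeros((H + 1, S, A))  # pessimistic safety Q-values
    for h in range(H - 1, -1, -1):
        # Evaluate both criteria under the action that is greedy for the
        # optimistic objective (a simplification of constrained planning).
        a = Q_obj[h + 1].argmin(axis=1)
        V_obj = Q_obj[h + 1][np.arange(S), a]
        V_safe = Q_safe[h + 1][np.arange(S), a]
        Q_obj[h] = np.clip(c_obj[h] - bonus[h] + P_hat[h] @ V_obj, 0.0, H)
        Q_safe[h] = np.clip(c_safe[h] + bonus[h] + P_hat[h] @ V_safe, 0.0, H)
    return Q_obj[:H], Q_safe[:H]
```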
Post-hoc explanation methods have become increasingly relied upon for understanding black-box classifiers in high-stakes applications, precipitating a need for reliable explanations. While numerous explanation methods have been proposed, recent works have shown that many existing methods can be inconsistent or unstable. In addition, high-performing classifiers are often highly nonlinear and can exhibit complex behavior around the decision boundary, leading to brittle or misleading local explanations. Therefore, there is a pressing need to quantify the uncertainty of such explanation methods in order to understand when explanations are trustworthy. We introduce a novel uncertainty quantification method parameterized by a Gaussian Process model, which combines the uncertainty approximation of existing methods with a novel geodesic-based similarity which captures the complexity of the target black-box decision boundary. The proposed framework is highly flexible; it can be used with any black-box classifier and feature attribution method to amortize uncertainty estimates for explanations. We show theoretically that our proposed geodesic-based kernel similarity increases with the complexity of the decision boundary. Empirical results on multiple tabular and image datasets show that our decision boundary-aware uncertainty estimate improves understanding of explanations as compared to existing methods.
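As a concrete illustration of the amortized-uncertainty idea, the sketch below fits a Gaussian Process to attributions computed on perturbations around the instance being explained and reads off the predictive standard deviation as an uncertainty estimate. The standard RBF kernel here is a placeholder assumption; the method described above instead uses a geodesic-based similarity that reflects decision-boundary complexity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_explanation_gp(X_local, attributions):
    """Fit one GP per attribution dimension on perturbed inputs near the
    instance being explained, so the predictive std can serve as an
    uncertainty estimate for the explanation.

    X_local      : (n, d) perturbed inputs
    attributions : (n, d) feature attributions from any attribution method
    """
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gps = []
    for j in range(attributions.shape[1]):
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X_local, attributions[:, j])
        gps.append(gp)
    return gps

def explanation_uncertainty(gps, x):
    """Return mean attribution and per-feature uncertainty at a query point."""
    means, stds = zip(*(gp.predict(x.reshape(1, -1), return_std=True) for gp in gps))
    return np.concatenate(means), np.concatenate(stds)
```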
Machine learning methods are becoming increasingly better at making predictions, but at the same time they are also becoming more complex and less transparent. As a result, explainers are often relied upon to provide interpretability for these black-box prediction models. As crucial diagnostic tools, it is important that these explainers themselves are reliable. In this paper, we focus on one particular aspect of reliability, namely that an explainer should give similar explanations for similar data inputs. We formalize this notion by introducing and defining the astuteness of explainers, analogous to the astuteness of classifiers. Our formalism is inspired by the concept of probabilistic Lipschitzness, which captures the probability of local smoothness of a function. For a variety of explainers (e.g., SHAP, RISE, CXPlain), we provide lower-bound guarantees on their astuteness given the Lipschitzness of the prediction function. These theoretical results imply that locally smooth prediction functions lend themselves to locally robust explanations. We evaluate these results empirically on simulated and real datasets.
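For concreteness, one way to write down the two notions above (the radius $r$, constants, and probability thresholds are illustrative assumptions, not necessarily the paper's exact definitions): a prediction function $f$ is probabilistically Lipschitz at $x$ if nearby inputs rarely change its output by much, and an explainer $E$ is astute at $x$ if nearby inputs rarely change its explanation by much:

$$\Pr_{x' \sim \mathcal{D}}\!\big[\, \|f(x) - f(x')\| \le L\,\|x - x'\| \;\big|\; \|x - x'\| \le r \,\big] \;\ge\; 1 - \alpha \quad \text{(probabilistic Lipschitzness of } f\text{)},$$

$$\Pr_{x' \sim \mathcal{D}}\!\big[\, \|E(x) - E(x')\| \le \lambda\,\|x - x'\| \;\big|\; \|x - x'\| \le r \,\big] \;\ge\; 1 - \beta \quad \text{(astuteness of explainer } E\text{)}.$$

The lower-bound guarantees mentioned above then relate $(\lambda, \beta)$ for a given explainer to $(L, \alpha)$ of the underlying prediction function.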
A key challenge in social network analysis is understanding the position, or stance, of people in the graph on a large set of topics. While past work has modeled (dis)agreement in social networks using signed graphs, these approaches have not modeled agreement patterns across a range of correlated topics. For instance, disagreement on one topic may make disagreement (or agreement) more likely for related topics. We propose the Stance Embeddings Model (SEM), which jointly learns embeddings for each user and topic in signed social graphs with distinct edge types for each topic. By jointly learning user and topic embeddings, SEM is able to perform cold-start topic stance detection, predicting the stance of a user on topics for which we have not observed their engagement. We demonstrate the effectiveness of SEM using two large-scale Twitter signed graph datasets we open-source. One dataset, TwitterSG, labels (dis)agreements using engagements between users via tweets to derive topic-informed, signed edges. The other, BirdwatchSG, leverages community reports on misinformation and misleading content. On TwitterSG and BirdwatchSG, SEM shows a 39% and 26% error reduction respectively against strong baselines.
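A minimal sketch of jointly embedding users and topics from topic-typed signed edges is shown below; the topic-gated dot-product scorer and the logistic loss are assumptions for illustration, not necessarily the exact SEM objective. Because users and topics share one embedding space, the same scorer can rank a user's stance on topics they have never engaged with (the cold-start setting).

```python
import torch
import torch.nn as nn

class StanceEmbeddings(nn.Module):
    """Joint user/topic embeddings scored per topic-typed signed edge."""
    def __init__(self, n_users, n_topics, dim=64):
        super().__init__()
        self.users = nn.Embedding(n_users, dim)
        self.topics = nn.Embedding(n_topics, dim)

    def forward(self, u, v, t):
        zu, zv, zt = self.users(u), self.users(v), self.topics(t)
        return (zu * zt * zv).sum(dim=-1)  # topic-conditioned agreement score

def training_step(model, opt, u, v, t, sign):
    """One logistic-loss step on a batch of edges; sign is +1 (agree) / -1 (disagree)."""
    opt.zero_grad()
    loss = nn.functional.softplus(-sign * model(u, v, t)).mean()
    loss.backward()
    opt.step()
    return loss.item()
```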